GPUs outperform current HPC and neuromorphic solutions in terms of speed and energy when simulating a highly-connected cortical model
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimisation for specific types of models make them unwieldy tools for developing them. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the Top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neocortex-inspired, circuit-scale, point neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator, faster than is currently possible using a CPU-based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than either on SpiNNaker or in CPU-based simulations. Besides performance in terms of speed and energy consumption of the simulation, efficient initialisation of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. Therefore, we also introduce in this paper some of the novel parallel initialisation methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
Synapse-Centric mapping of cortical models to the SpiNNaker neuromorphic architecture
While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.
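The synapse-centric idea described above can be illustrated with a toy NumPy sketch: instead of giving each core a set of neurons together with all of their incoming synapses, the synapses themselves are sliced across cores, and each core contributes a partial input current that is summed for the postsynaptic neuron. This is an illustrative sketch only, not SpiNNaker code; the function name and parameters are hypothetical.

```python
import numpy as np

def synapse_centric_input(weights, spikes, num_cores):
    """Toy sketch of synapse-centric mapping for one postsynaptic neuron.

    `weights` is the neuron's full incoming weight vector and `spikes` a
    boolean vector of presynaptic activity in the current time step. Each
    simulated "core" processes only its slice of the synapses; the partial
    input currents are then combined.
    """
    partial_sums = []
    for core_weights, core_spikes in zip(np.array_split(weights, num_cores),
                                         np.array_split(spikes, num_cores)):
        # Work that would run independently on one core:
        partial_sums.append(float(np.dot(core_weights, core_spikes)))
    return sum(partial_sums)

# Splitting 8000 incoming synapses across 4 cores gives the same input
# current as processing them on one core, with a quarter of the per-core work:
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, 8000)
s = rng.random(8000) < 0.05
assert np.isclose(synapse_centric_input(w, s, 4), np.dot(w, s))
```

The design point is that the per-core workload now scales with the number of synapses per core rather than per neuron, which is what allows dense STDP networks to fit within a fixed time step.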
Larger GPU-accelerated brain simulations with procedural connectivity
Simulations are an important tool for investigating brain function but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high performance computer systems. Here, we present an alternative simulation method we call 'procedural connectivity' where connectivity and synaptic weights are generated 'on the fly' instead of stored and retrieved from memory. This method is particularly well-suited for use on Graphics Processing Units (GPUs), which are a common fixture in many workstations. Using procedural connectivity and an additional GPU code generation optimisation, we can simulate a recent model of the Macaque visual cortex with 4.13 million neurons and 24.2 billion synapses on a single GPU, a significant step forward in making large-scale brain modelling accessible to more researchers.
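The core trick behind procedural connectivity is determinism: if each row of the connectivity matrix is derived from a reproducible seed, it never needs to be stored and can be regenerated whenever the presynaptic neuron spikes. The following is a minimal NumPy sketch of that idea, assuming fixed-probability connectivity with normally distributed weights; GeNN's actual implementation runs inside generated CUDA kernels, and the names here are hypothetical.

```python
import numpy as np

def procedural_row(pre_idx, num_post, prob, weight_mu, weight_sigma, seed=1234):
    """Regenerate the outgoing connections of one presynaptic neuron on demand.

    Seeding the RNG with (seed, pre_idx) makes every call reproduce the
    identical row, so no synaptic memory needs to be allocated.
    """
    rng = np.random.default_rng([seed, pre_idx])
    mask = rng.random(num_post) < prob           # fixed-probability connectivity
    targets = np.nonzero(mask)[0]                # postsynaptic indices
    weights = rng.normal(weight_mu, weight_sigma, targets.size)
    return targets, weights

# The same row is regenerated identically whenever neuron 42 spikes:
t1, w1 = procedural_row(42, 10000, 0.1, 0.5, 0.1)
t2, w2 = procedural_row(42, 10000, 0.1, 0.5, 0.1)
assert np.array_equal(t1, t2) and np.array_equal(w1, w2)
```

Memory use is reduced from O(synapses) to O(1) per connection population, traded against the extra arithmetic of regenerating rows, which is cheap on a GPU.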
Efficient GPU training of LSNNs using eProp
Taking inspiration from machine learning libraries - where techniques such as parallel batch training minimise latency and maximise GPU occupancy - as well as our previous research on efficiently simulating Spiking Neural Networks (SNNs) on GPUs for computational neuroscience, we have extended our GeNN SNN simulator to enable spike-based machine learning research on general purpose hardware. We demonstrate that SNN classifiers implemented using GeNN and trained using the eProp learning rule can provide comparable performance to those trained using Back Propagation Through Time and show that the latency and energy usage of our SNN classifiers is up to 7× lower than an LSTM running on the same GPU hardware.
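The eProp rule mentioned above replaces backpropagation through time with purely local, forward-running quantities: a low-pass-filtered trace of presynaptic activity, gated by the postsynaptic neuron's surrogate derivative, forms an eligibility trace that is multiplied by an online learning signal. The following single-synapse sketch shows the general shape of that update; it is a conceptual illustration under simplifying assumptions (scalar synapse, precomputed pseudo-derivatives and learning signals), not GeNN's implementation.

```python
def eprop_weight_update(pre_spikes, pseudo_deriv, learning_signal,
                        alpha=0.9, lr=1e-3):
    """Accumulate an eProp-style weight change for one synapse.

    pre_spikes      -- presynaptic spike train (0/1 per time step)
    pseudo_deriv    -- surrogate derivative of the postsynaptic neuron
    learning_signal -- online error signal broadcast to the neuron
    alpha           -- decay of the filtered presynaptic trace
    """
    z_trace = 0.0       # low-pass filter of presynaptic spikes
    dw = 0.0
    for z, psi, L in zip(pre_spikes, pseudo_deriv, learning_signal):
        z_trace = alpha * z_trace + z   # update input trace
        e = psi * z_trace               # eligibility trace (local)
        dw += lr * L * e                # gate by the learning signal
    return dw
```

Because every term is available at the current time step, the update parallelises over synapses in the same way as the simulation itself, which is what makes it a good fit for GPU training.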
Expectations and Experiences of Short-Term Study Abroad Leadership Teams
This paper explores the expectations and experiences of faculty, academic advisors, and graduate students leading a study abroad experience for first-year engineering students. In the current age of globalization, engineering students require a global understanding of engineering to be competent in the global workforce. In response, undergraduate engineering programs have created various programs to fill this student need. The research surrounding these initiatives focuses on the student experience but is limited when describing that of program leaders. This qualitative study draws from track leader journals that were completed during and shortly after the international program as well as semi-structured interviews in the following semester. The findings suggest that the majority of leaders expected their role to be that of an educator on the study abroad experience, but upon reflection, realized that their definition of what it means to be an educator expanded to encompass facilitation of learning. Many of the student learning instances leaders pointed to had to do with facilitating a learning environment rather than delivering content or answering technical questions. The roles described by leaders varied from troubleshooter to behavioral manager to informer. Leaders reflected that their roles developed as they met students where they were in their learning within the dynamic international context of the program. Overall, leaders saw their roles evolve over the course of the trip. The findings shed light on emergent power dynamics that leadership teams engage in outside of the formal learning environment and provide a unique insight into the types of learning program leaders can experience through leading study abroad programs. The multiple forms of data collection provide deeper insights into the experiences of the leaders while encouraging them to also reflect in real-time. 
This study has implications for the development of intentionally designed, condensed study-abroad experiences that draw from an understanding of program leaders' experiences.
Metabolome-Informed Microbiome Analysis Refines Metadata Classifications and Reveals Unexpected Medication Transfer in Captive Cheetahs.
Even high-quality collection and reporting of study metadata in microbiome studies can lead to various forms of inadvertently missing or mischaracterized information that can alter the interpretation or outcome of the studies, especially with nonmodel organisms. Metabolomic profiling of fecal microbiome samples can provide empirical insight into unanticipated confounding factors that are not possible to obtain even from detailed care records. We illustrate this point using data from cheetahs from the San Diego Zoo Safari Park. The metabolomic characterization indicated that one cheetah had to be moved from the non-antibiotic-exposed group to the antibiotic-exposed group. The detection of the antibiotic in this second cheetah was likely due to grooming interactions with the cheetah that was administered antibiotics. Similarly, because transit time for stool is variable, fecal samples within the first few days of antibiotic prescription do not all contain detected antibiotics, and the microbiome is not yet affected. These insights significantly altered the way the samples were grouped for analysis (antibiotic versus no antibiotic) and the subsequent understanding of the effect of the antibiotics on the cheetah microbiome. Metabolomics also revealed information about numerous other medications and provided unexpected dietary insights that in turn improved our understanding of how molecular patterns impact microbial community structure. These results suggest that untargeted metabolomic data provide empirical evidence to correct records and aid in the monitoring of the health of nonmodel organisms in captivity, although we also expect that these methods may be appropriate for other social animals, such as cats.

IMPORTANCE Metabolome-informed analyses can enhance omics studies by enabling the correct partitioning of samples by identifying hidden confounders inadvertently misrepresented or omitted from carefully curated metadata.
We demonstrate here the utility of metabolomics in a study characterizing the microbiome associated with liver disease in cheetahs. Metabolome-informed reinterpretation of metagenome and metabolome profiles factored in an unexpected transfer of antibiotics, preventing misinterpretation of the data. Our work suggests that untargeted metabolomics can be used to verify, augment, and correct sample metadata to support improved grouping of sample data for microbiome analyses, here for nonmodel organisms in captivity. However, the techniques also suggest a path forward for correcting clinical information in microbiome studies more broadly to enable higher-precision analyses.
Study of aluminoborane compound AlB₄H₁₁ for hydrogen storage
Aluminoborane compounds AlB₄H₁₁, AlB₅H₁₂, and AlB₆H₁₃ were reported by Himpsl and Bond in 1981, but they have eluded the attention of the worldwide hydrogen storage research community for more than a quarter of a century. These aluminoborane compounds have very attractive properties for hydrogen storage: high hydrogen capacity (i.e., 13.5, 12.9, and 12.4 wt % H, respectively) and attractive hydrogen desorption temperature (i.e., AlB₄H₁₁ decomposes at ~125 °C). We have synthesized AlB₄H₁₁ and studied its thermal desorption behavior using temperature-programmed desorption with mass spectrometry, gas volumetric (Sieverts) measurement, infrared (IR) spectroscopy, and solid state nuclear magnetic resonance (NMR). Rehydrogenation of hydrogen-desorbed products was performed and encouraging evidence of at least partial reversibility for hydrogenation at relatively mild conditions was observed. Our chemical analysis indicates that the formula for the compound is closer to AlB₄H₁₂ than AlB₄H₁₁.
PyGeNN: a Python library for GPU-enhanced Neural Networks
More than half of the Top 10 supercomputing sites worldwide use GPU accelerators and they are becoming ubiquitous in workstations and edge computing devices. GeNN is a C++ library for generating efficient spiking neural network simulation code for GPUs. However, until now, the full flexibility of GeNN could only be harnessed by writing model descriptions and simulation code in C++. Here we present PyGeNN, a Python package which exposes all of GeNN's functionality to Python with minimal overhead. This provides an alternative, arguably more user-friendly, way of using GeNN and allows modelers to use GeNN within the growing Python-based machine learning and computational neuroscience ecosystems. In addition, we demonstrate that, in both Python and C++ GeNN simulations, the overheads of recording spiking data can strongly affect runtimes and show how a new spike recording system can reduce these overheads by up to 10×. Using the new recording system, we demonstrate that by using PyGeNN on a modern GPU, we can simulate a full-scale model of a cortical column even faster than real-time neuromorphic systems. Finally, we show that long simulations of a smaller model with complex stimuli and a custom three-factor learning rule defined in PyGeNN can be simulated almost two orders of magnitude faster than real-time.
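The recording overhead discussed above comes largely from copying spike data off the device every time step. A common remedy, and the general shape of a batched recording system, is to pack spikes into a compact bitfield ring buffer on the device and download it only once every N steps, amortising the fixed per-transfer latency. The sketch below models the "device buffer" as a NumPy bit array to show the packing, one bit per neuron per step; it is an illustration of the idea only, not GeNN's generated CUDA code, and all names are hypothetical.

```python
import numpy as np

def buffered_spike_recording(spike_steps, num_neurons, buffer_steps):
    """Pack per-step spike lists into a bitfield buffer, 'downloading' it
    (here, copying it out) only when the buffer is full.

    spike_steps  -- list of per-timestep lists of spiking neuron indices
    num_neurons  -- population size (one bit per neuron per step)
    buffer_steps -- number of steps held on the device between transfers
    """
    words_per_step = (num_neurons + 31) // 32
    buffer = np.zeros((buffer_steps, words_per_step), dtype=np.uint32)
    recorded = []
    for t, spikes in enumerate(spike_steps):
        row = t % buffer_steps
        buffer[row] = 0
        for n in spikes:                      # set one bit per spiking neuron
            buffer[row, n // 32] |= np.uint32(1 << (n % 32))
        if row == buffer_steps - 1:           # buffer full: one bulk transfer
            recorded.append(buffer.copy())
    return recorded
```

Relative to copying a full spike list every step, this reduces the number of host-device transfers by a factor of `buffer_steps` and shrinks each transfer to one bit per neuron per step.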